

The EU is investigating Grok and X over potentially illegal deepfakes

Engadget

X's lack of controls potentially "expos[ed] citizens in the EU to serious harm," regulators said. Europe is probing Elon Musk's X for failing to take action to prevent the spread of AI-generated sexually explicit images, including child sexual abuse material (CSAM), regulators said in a press release. The European Commission's investigation could result in "further enforcement steps" against X, not long after it levied a $140 million fine against the platform. "Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA [Digital Services Act], or whether it treated rights of European citizens -- including those of women and children -- as collateral damage of its service," said the Commission's executive VP, Henna Virkkunen, in a statement.


EU launches inquiry into X over sexually explicit images made by Grok AI

The Guardian

The AI chatbot feature on X, Grok, was found by one study to have generated about 3m sexualised images in 11 days. The investigation comes after Elon Musk's firm sparked outrage by allowing users to "strip" photos of women and children. The European Commission has launched an investigation into Elon Musk's X over the production of sexually explicit images and the spreading of possible child sexual abuse material by the platform's AI chatbot feature, Grok. The formal inquiry, launched on Monday, also extends an investigation into X's recommender systems, the algorithms that help users discover new content. Grok has sparked international outrage by allowing users to digitally strip women and children and put them into provocative poses.


Man held in Japan on suspicion of creating female celeb deepfakes made with AI

The Japan Times

Tokyo police have arrested a 31-year-old man for allegedly creating fake sexual images of female celebrities with generative artificial intelligence technology and displaying them online, it was learned Thursday. It is the first time that police in Japan have cracked down on sexual deepfake images of celebrities created with generative AI. The suspect, Hiroya Yokoi of the city of Akita, has admitted he began making deepfakes to earn a small amount of money, which he used to cover living expenses and repay a student loan. Authorities believe Yokoi made a total of about 20,000 sexually explicit images of 262 women, such as actors, television personalities and idols, and amassed sales of ¥1.2 million between October last year and September this year.


High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

The Atlantic - Technology

For years now, generative AI has been used to conjure all sorts of realities--dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases--perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been. This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center's polling found, 15 percent of high schoolers reported hearing about a "deepfake"--or AI-generated image--that depicted someone associated with their school in a sexually explicit or intimate manner.


To protect children, we need to fill these gaps in AI policy

FOX News

Canopy CMO Yaron Litwin discusses how criminals are using deepfake technology to blackmail teens and generate child pornography. Today's hot trend for policymakers is talking about artificial intelligence. This incredibly powerful technology is here to stay, and new research shows that most of us are optimistic about how generative AI will be able to improve our lives. But there are some new and concerning threats to which policymakers must pay attention. This includes a horrific misuse of this positive tech: bad actors abusing AI to put real people in sexually explicit situations, including minors.